2025-11-13 09:11:05,521 [ 180668 ] INFO : ClickHouse root is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse (runner:53, check_args_and_update_paths)
2025-11-13 09:11:05,521 [ 180668 ] INFO : Cases dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:79, check_args_and_update_paths)
2025-11-13 09:11:05,521 [ 180668 ] INFO : utils dir is not set. Will use /home/ubuntu/_work/ClickHouse/ClickHouse/utils (runner:90, check_args_and_update_paths)
2025-11-13 09:11:05,521 [ 180668 ] INFO : base_configs_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/programs/server, binary: /home/ubuntu/_work/_temp/test/build/clickhouse, cases_dir: /home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration (runner:92, check_args_and_update_paths)
clickhouse_integration_tests_volume
Running pytest container as: 'docker run --rm --name clickhouse_integration_tests_mmxjdk --privileged --dns-search='.' --memory=30709039104 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 '.
Start tests
============================= test session starts ==============================
platform linux -- Python 3.10.12, pytest-7.4.4, pluggy-1.5.0 -- /usr/bin/python3
cachedir: .pytest_cache
Test order randomisation NOT enabled. Enable with --random-order or --random-order-bucket=
rootdir: /ClickHouse/tests/integration
configfile: pytest.ini
plugins: timeout-2.3.1, repeat-0.9.3, order-1.0.0, reportlog-0.4.0, xdist-3.5.0, random-order-1.1.1
timeout: 900.0s
timeout method: signal
timeout func_only: False
created: 10/10 workers
10 workers [6 items]

scheduling tests via LoadFileScheduling

test_attach_partition_using_copy/test.py::test_all_replicated
test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
[gw3] [ 16%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
[gw0] [ 33%] FAILED test_attach_partition_using_copy/test.py::test_all_replicated
test_attach_partition_using_copy/test.py::test_both_mergetree
[gw3] [ 50%] FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
[gw0] [ 66%] FAILED test_attach_partition_using_copy/test.py::test_both_mergetree
test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
[gw0] [ 83%] FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
test_attach_partition_using_copy/test.py::test_only_destination_replicated
[gw0] [100%] FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated

=================================== FAILURES ===================================
____________________ test_cow_policy[cow_policy_multi_disk] ____________________
[gw3] linux -- Python 3.10.12 /usr/bin/python3

start_cluster = 
storage_policy = 'cow_policy_multi_disk'

    @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"])
    def test_cow_policy(start_cluster, storage_policy):
        try:
>           node.query_with_retry(
                f"""
                ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
                (
                    price UInt32,
                    date Date,
                    postcode1 LowCardinality(String),
                    postcode2 LowCardinality(String),
                    type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
                    is_new UInt8,
                    duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
                    addr1 String,
                    addr2 String,
                    street LowCardinality(String),
                    locality LowCardinality(String),
                    town LowCardinality(String),
                    district LowCardinality(String),
                    county LowCardinality(String)
                )
                ENGINE = MergeTree
                ORDER BY (postcode1, postcode2, addr1, addr2)
                SETTINGS storage_policy = '{storage_policy}'
                """,
                timeout=60,
                retry_count=3,
            )

test_cow_policy/test.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...R BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_disk'\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f6180294040>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""
        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)

        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E       Exception: Can't execute query
E       ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E           price UInt32,
E           date Date,
E           postcode1 LowCardinality(String),
E           postcode2 LowCardinality(String),
E           type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E           is_new UInt8,
E           duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E           addr1 String,
E           addr2 String,
E           street LowCardinality(String),
E           locality LowCardinality(String),
E           town LowCardinality(String),
E           district LowCardinality(String),
E           county LowCardinality(String)
E       )
E       ENGINE = MergeTree
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS storage_policy = 'cow_policy_multi_disk'
E
E       QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8):
E       Code: 198. DB::Exception: Received from 172.16.1.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x0000000020e10280
E       1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000010b395b4
E       2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x0000000010af5e99
E       3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x0000000010af1803
E       4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x0000000010aefdd3
E       5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x0000000010af03b9
E       6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x0000000010af0af0
E       7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x0000000010fc2c29
E       8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000010fc0a8f
E       9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x0000000010fc089f
E       10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x0000000010fc053e
E       11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x0000000010fc5b23
E       12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x0000000010fc5826
E       13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x0000000010fc2af1
E       14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4fb6
E       15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4218
E       16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x0000000010fccd2d
E       17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000013cfa745
E       18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x0000000013cfa9f4
E       19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000013cfb4a0
E       20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x0000000013cfe657
E       21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000013cf7eb9
E       22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x0000000013cfcb61
E       23. DB::ReadBuffer::next() @ 0x000000000891fc7b
E       24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x00000000179744d9
E       25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797795a
E       26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797737a
E       27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000017971925
E       28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x00000000178d7f12
E       29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001b300bdd
E       30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000001b7d5bc7
E       31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000001b7d54e4
E       . (DNS_ERROR)
E       (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E           price UInt32,
E           date Date,
E           postcode1 LowCardinality(String),
E           postcode2 LowCardinality(String),
E           type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E           is_new UInt8,
E           duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E           addr1 String,
E           addr2 String,
E           street LowCardinality(String),
E           locality LowCardinality(String),
E           town LowCardinality(String),
E           district LowCardinality(String),
E           county LowCardinality(String)
E       )
E       ENGINE = MergeTree
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS storage_policy = 'cow_policy_multi_disk'
E       )

helpers/cluster.py:3712: Exception
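Both cow_policy cases fail with the same root cause: the server cannot resolve raw.githubusercontent.com while loading web-disk metadata, so every ATTACH attempt ends in DNS_ERROR (code 198). A minimal sketch to confirm the resolver failure independently of ClickHouse, assuming it is run inside the test container's network namespace (the hostname is taken from the exception above):

# Minimal sketch: reproduce the resolver failure the server reports.
# Assumption: executed inside the integration-test network, where the
# web disk endpoint raw.githubusercontent.com is not resolvable.
import socket

try:
    socket.getaddrinfo("raw.githubusercontent.com", 443)
    print("resolved fine; the DNS_ERROR above was transient")
except socket.gaierror as err:
    # Mirrors "DB::NetException: Not found address of host: raw.githubusercontent.com"
    print(f"resolution failed, matching the server log: {err}")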
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2025-11-13 09:11:12.126000 [ 637 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.149000 [ 637 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.150000 [ 637 ] DEBUG : No running containers (conftest.py:95, cleanup_environment)
2025-11-13 09:11:12.150000 [ 637 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment)
2025-11-13 09:11:12.150000 [ 637 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.173000 [ 637 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.176000 [ 637 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.177000 [ 637 ] INFO : Running tests in /ClickHouse/tests/integration/test_cow_policy/test.py (cluster.py:2738, start)
2025-11-13 09:11:12.177000 [ 637 ] DEBUG : Cluster start called. is_up=False (cluster.py:2745, start)
2025-11-13 09:11:12.200000 [ 637 ] DEBUG : Docker networks for project roottestcowpolicy-gw3 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:11:12.226000 [ 637 ] DEBUG : Docker containers for project roottestcowpolicy-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:11:12.253000 [ 637 ] DEBUG : Docker volumes for project roottestcowpolicy-gw3 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:11:12.254000 [ 637 ] DEBUG : Cleanup called (cluster.py:851, cleanup)
2025-11-13 09:11:12.278000 [ 637 ] DEBUG : Docker networks for project roottestcowpolicy-gw3 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:11:12.304000 [ 637 ] DEBUG : Docker containers for project roottestcowpolicy-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:11:12.332000 [ 637 ] DEBUG : Docker volumes for project roottestcowpolicy-gw3 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:11:12.332000 [ 637 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.353000 [ 637 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup)
2025-11-13 09:11:12.354000 [ 637 ] DEBUG : No running containers for project: roottestcowpolicy-gw3 (cluster.py:879, cleanup)
2025-11-13 09:11:12.354000 [ 637 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup)
2025-11-13 09:11:12.379000 [ 637 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup)
2025-11-13 09:11:12.379000 [ 637 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.417000 [ 637 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check)
2025-11-13 09:11:12.417000 [ 637 ] DEBUG : Images pruned (cluster.py:904, cleanup)
2025-11-13 09:11:12.418000 [ 637 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup)
2025-11-13 09:11:12.418000 [ 637 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.442000 [ 637 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.442000 [ 637 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup)
2025-11-13 09:11:12.442000 [ 637 ] DEBUG : Setup directory for instance: node (cluster.py:2758, start)
2025-11-13 09:11:12.443000 [ 637 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir)
2025-11-13 09:11:12.443000 [ 637 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir)
2025-11-13 09:11:12.444000 [ 637 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir)
2025-11-13 09:11:12.445000 [ 637 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir)
2025-11-13 09:11:12.445000 [ 637 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_cow_policy/configs/overrides.yaml'] to /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/configs/config.d (cluster.py:4741, create_dir)
2025-11-13 09:11:12.446000 [ 637 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/database (cluster.py:4758, create_dir)
2025-11-13 09:11:12.446000 [ 637 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/logs (cluster.py:4769, create_dir)
2025-11-13 09:11:12.447000 [ 637 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir)
2025-11-13 09:11:12.447000 [ 637 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw'} stored in /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env (cluster.py:96, _create_env_file)
2025-11-13 09:11:12.448000 [ 637 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-11-13 09:11:12.448000 [ 637 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-11-13 09:11:12.448000 [ 637 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-11-13 09:11:12.448000 [ 637 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-11-13 09:11:12.460000 [ 637 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request)
2025-11-13 09:11:12.461000 [ 637 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env --project-name roottestcowpolicy-gw3 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/docker-compose.yml pull] (cluster.py:121, run_and_check)
2025-11-13 09:11:22.895000 [ 637 ] DEBUG : Stderr: node Pulling (cluster.py:147, run_and_check)
2025-11-13 09:11:22.896000 [ 637 ] DEBUG : Stderr: node Pulled (cluster.py:147, run_and_check)
2025-11-13 09:11:22.896000 [ 637 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env --project-name roottestcowpolicy-gw3 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start)
2025-11-13 09:11:22.896000 [ 637 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env --project-name roottestcowpolicy-gw3 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Network roottestcowpolicy-gw3_default Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Network roottestcowpolicy-gw3_default Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Starting (cluster.py:147, run_and_check)
2025-11-13 09:11:23.516000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Started (cluster.py:147, run_and_check)
2025-11-13 09:11:23.517000 [ 637 ] DEBUG : ClickHouse instance created (cluster.py:3147, start)
2025-11-13 09:11:23.517000 [ 637 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2005, get_instance_ip)
2025-11-13 09:11:23.519000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw3-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.520000 [ 637 ] DEBUG : get_instance_ip instance_name=node (cluster.py:2015, get_instance_global_ipv6)
2025-11-13 09:11:23.522000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw3-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.523000 [ 637 ] DEBUG : Waiting for ClickHouse start in node, ip: 172.16.1.2... (cluster.py:3155, start)
2025-11-13 09:11:23.526000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestcowpolicy-gw3-node-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.531000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.634000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.737000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.840000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.944000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.047000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.151000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.255000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.359000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.463000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.567000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.671000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.775000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.880000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:24.984000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.088000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.192000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.295000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.399000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.503000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.606000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.709000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.813000 [ 637 ] DEBUG : http://localhost:None "GET /v1.46/containers/e79598e80e0f6747fd06ffc28ebd7795028e5b5d6e21196565fdbc48682bf9bd/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:25.814000 [ 637 ] DEBUG : ClickHouse node started (cluster.py:3159, start)
------------------------------ Captured log call -------------------------------
2025-11-13 09:11:25.817000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query)
2025-11-13 09:12:19.792000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query)
2025-11-13 09:13:14.310000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_disk' on node (cluster.py:3648, query)
2025-11-13 09:14:09.540000 [ 637 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query)
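The three ATTACH attempts above land roughly 54-55 s apart, which is consistent with the call site shown in the traceback (timeout=60, retry_count=3, sleep_time=0.5): each attempt waits out most of the client timeout before the next retry, so the test spends about three minutes (09:11:25 to 09:14:09) before query_with_retry raises. A back-of-the-envelope check of that bound:

# Rough upper bound on wall time before query_with_retry gives up.
# The numbers come from the call shown in the traceback; the per-attempt
# duration observed in the log (~54 s) is just under the 60 s client timeout.
retry_count = 3
timeout = 60      # seconds per attempt (client-side)
sleep_time = 0.5  # pause between attempts

worst_case = retry_count * (timeout + sleep_time)
print(f"worst case before the final raise: ~{worst_case:.0f} s")  # ~182 s, i.e. about 3 minutes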
_____________________________ test_all_replicated ______________________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

start_cluster = 

    def test_all_replicated(start_cluster):
        cleanup([replica1, replica2])
>       create_source_table(replica1, "source", True)

test_attach_partition_using_copy/test.py:128: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_attach_partition_using_copy/test.py:40: in create_source_table
    node.query_with_retry(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f9ccbbdc040>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""
        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)

        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E       Exception: Can't execute query
E       ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E           price UInt32,
E           date Date,
E           postcode1 LowCardinality(String),
E           postcode2 LowCardinality(String),
E           type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E           is_new UInt8,
E           duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E           addr1 String,
E           addr2 String,
E           street LowCardinality(String),
E           locality LowCardinality(String),
E           town LowCardinality(String),
E           district LowCardinality(String),
E           county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E
E       QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8):
E       Code: 198. DB::Exception: Received from 172.16.2.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E
E       0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x0000000020e10280
E       1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000010b395b4
E       2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x0000000010af5e99
E       3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x0000000010af1803
E       4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x0000000010aefdd3
E       5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x0000000010af03b9
E       6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x0000000010af0af0
E       7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x0000000010fc2c29
E       8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000010fc0a8f
E       9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x0000000010fc089f
E       10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x0000000010fc053e
E       11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x0000000010fc5b23
E       12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x0000000010fc5826
E       13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x0000000010fc2af1
E       14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4fb6
E       15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4218
E       16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x0000000010fccd2d
E       17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000013cfa745
E       18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x0000000013cfa9f4
E       19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000013cfb4a0
E       20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x0000000013cfe657
E       21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000013cf7eb9
E       22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x0000000013cfcb61
E       23. DB::ReadBuffer::next() @ 0x000000000891fc7b
E       24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x00000000179744d9
E       25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797795a
E       26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797737a
E       27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000017971925
E       28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x00000000178d7f12
E       29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001b300bdd
E       30. ./build_docker/./src/Storages/StorageReplicatedMergeTree.cpp:414: DB::StorageReplicatedMergeTree::StorageReplicatedMergeTree(DB::TableZnodeInfo const&, DB::LoadingStrictnessLevel, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>, bool, DB::ZooKeeperRetriesInfo const&) @ 0x000000001ac4fa34
E       31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::TableZnodeInfo&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, bool&, DB::ZooKeeperRetriesInfo&, 0>(std::allocator const&, DB::TableZnodeInfo&, DB::LoadingStrictnessLevel const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&, bool&, DB::ZooKeeperRetriesInfo&) @ 0x000000001b7d5151
E       . (DNS_ERROR)
E       (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E       (
E           price UInt32,
E           date Date,
E           postcode1 LowCardinality(String),
E           postcode2 LowCardinality(String),
E           type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E           is_new UInt8,
E           duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E           addr1 String,
E           addr2 String,
E           street LowCardinality(String),
E           locality LowCardinality(String),
E           town LowCardinality(String),
E           district LowCardinality(String),
E           county LowCardinality(String)
E       )
E       ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1')
E       ORDER BY (postcode1, postcode2, addr1, addr2)
E       SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E       )

helpers/cluster.py:3712: Exception
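This is the same resolver failure as in test_cow_policy, reached through the inline web disk in the SETTINGS clause rather than a named storage policy. A hypothetical fail-fast guard (a sketch, not part of the test suite; it assumes the instance object exposes exec_in_container as the helpers.cluster instances in these tracebacks do, and that getent is available in the image):

# Hypothetical guard (sketch): skip web-disk tests early when the endpoint
# host does not resolve inside the node, instead of spending three 60 s
# retries per test on the same DNS_ERROR.
def endpoint_resolvable(node, host="raw.githubusercontent.com"):
    # exec_in_container is the helper method these instances expose (assumption);
    # getent availability in the image is also an assumption.
    out = node.exec_in_container(["bash", "-c", f"getent hosts {host} || true"])
    return bool(out.strip())

# Usage sketch inside a test:
# if not endpoint_resolvable(replica1):
#     pytest.skip("web disk endpoint is not resolvable from the container")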
---------------------------- Captured stdout setup -----------------------------
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
Copy common default production configuration from /clickhouse-config. Files: config.xml, users.xml
------------------------------ Captured log setup ------------------------------
2025-11-13 09:11:12.125000 [ 628 ] DEBUG : Command:[docker ps | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.150000 [ 628 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.150000 [ 628 ] DEBUG : No running containers (conftest.py:95, cleanup_environment)
2025-11-13 09:11:12.150000 [ 628 ] DEBUG : Pruning Docker networks (conftest.py:97, cleanup_environment)
2025-11-13 09:11:12.150000 [ 628 ] DEBUG : Command:[docker network prune --force] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.175000 [ 628 ] DEBUG : Command:[sysctl net.ipv4.ip_local_port_range='55000 65535'] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.177000 [ 628 ] DEBUG : Stdout:net.ipv4.ip_local_port_range = 55000 65535 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.177000 [ 628 ] INFO : Running tests in /ClickHouse/tests/integration/test_attach_partition_using_copy/test.py (cluster.py:2738, start)
2025-11-13 09:11:12.177000 [ 628 ] DEBUG : Cluster start called. is_up=False (cluster.py:2745, start)
2025-11-13 09:11:12.203000 [ 628 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:11:12.229000 [ 628 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:11:12.255000 [ 628 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw0 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:11:12.255000 [ 628 ] DEBUG : Cleanup called (cluster.py:851, cleanup)
2025-11-13 09:11:12.282000 [ 628 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:11:12.308000 [ 628 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:11:12.337000 [ 628 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw0 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:11:12.337000 [ 628 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.361000 [ 628 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup)
2025-11-13 09:11:12.361000 [ 628 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw0 (cluster.py:879, cleanup)
2025-11-13 09:11:12.362000 [ 628 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup)
2025-11-13 09:11:12.386000 [ 628 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup)
2025-11-13 09:11:12.386000 [ 628 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.410000 [ 628 ] DEBUG : Stderr:Error response from daemon: a prune operation is already running (cluster.py:147, run_and_check)
2025-11-13 09:11:12.410000 [ 628 ] DEBUG : Exitcode:1 (cluster.py:149, run_and_check)
2025-11-13 09:11:12.411000 [ 628 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup)
2025-11-13 09:11:12.411000 [ 628 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:11:12.433000 [ 628 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:11:12.433000 [ 628 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup)
2025-11-13 09:11:12.434000 [ 628 ] DEBUG : Setup directory for instance: replica1 (cluster.py:2758, start)
2025-11-13 09:11:12.435000 [ 628 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir)
2025-11-13 09:11:12.435000 [ 628 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir)
2025-11-13 09:11:12.436000 [ 628 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir)
2025-11-13 09:11:12.437000 [ 628 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir)
2025-11-13 09:11:12.437000 [ 628 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/configs/config.d (cluster.py:4741, create_dir)
2025-11-13 09:11:12.438000 [ 628 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/database (cluster.py:4758, create_dir)
2025-11-13 09:11:12.438000 [ 628 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/logs (cluster.py:4769, create_dir)
2025-11-13 09:11:12.438000 [ 628 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir)
2025-11-13 09:11:12.438000 [ 628 ] DEBUG : Setup directory for instance: replica2 (cluster.py:2758, start)
2025-11-13 09:11:12.439000 [ 628 ] DEBUG : Create directory for configuration generated in this helper (cluster.py:4628, create_dir)
2025-11-13 09:11:12.440000 [ 628 ] DEBUG : Create directory for common tests configuration (cluster.py:4633, create_dir)
2025-11-13 09:11:12.440000 [ 628 ] DEBUG : Copy common configuration from helpers (cluster.py:4653, create_dir)
2025-11-13 09:11:12.441000 [ 628 ] DEBUG : Generate and write macros file (cluster.py:4705, create_dir)
2025-11-13 09:11:12.441000 [ 628 ] DEBUG : Copy custom test config files ['/ClickHouse/tests/integration/test_attach_partition_using_copy/configs/remote_servers.xml'] to /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/configs/config.d (cluster.py:4741, create_dir)
2025-11-13 09:11:12.441000 [ 628 ] DEBUG : Setup database dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/database (cluster.py:4758, create_dir)
2025-11-13 09:11:12.442000 [ 628 ] DEBUG : Setup logs dir /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/logs (cluster.py:4769, create_dir)
2025-11-13 09:11:12.442000 [ 628 ] DEBUG : Entrypoint cmd: ["clickhouse", "server", "--config-file=/etc/clickhouse-server/config.xml", "--log-file=/var/log/clickhouse-server/clickhouse-server.log", "--errorlog-file=/var/log/clickhouse-server/clickhouse-server.err.log", "--"] (cluster.py:4850, create_dir)
2025-11-13 09:11:12.442000 [ 628 ] DEBUG : Env {'ASAN_OPTIONS': 'use_sigaltstack=0', 'TSAN_OPTIONS': 'use_sigaltstack=0', 'CLICKHOUSE_WATCHDOG_ENABLE': '0', 'CLICKHOUSE_NATS_TLS_SECURE': '0', 'LLVM_PROFILE_FILE': '/var/lib/clickhouse/server_%h_%p_%m.profraw', 'keeper_binary': '/clickhouse', 'keeper_cmd_prefix': 'clickhouse keeper', 'image': 'altinityinfra/integration-test:5ccda723c1fc', 'user': '0', 'keeper_fs': 'bind', 'keeper_logs_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/log', 'keeper_config_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/config', 'keeper_db_dir1': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/coordination', 'keeper_logs_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/log', 'keeper_config_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/config', 'keeper_db_dir2': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/coordination', 'keeper_logs_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/log', 'keeper_config_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/config', 'keeper_db_dir3': '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/coordination'} stored in /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env (cluster.py:96, _create_env_file)
2025-11-13 09:11:12.443000 [ 628 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-11-13 09:11:12.443000 [ 628 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-11-13 09:11:12.443000 [ 628 ] DEBUG : Trying paths: ['/root/.docker/config.json', '/root/.dockercfg'] (config.py:21, find_config_file)
2025-11-13 09:11:12.443000 [ 628 ] DEBUG : No config file found (config.py:28, find_config_file)
2025-11-13 09:11:12.456000 [ 628 ] DEBUG : http://localhost:None "GET /version HTTP/1.1" 200 826 (connectionpool.py:547, _make_request)
2025-11-13 09:11:12.457000 [ 628 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --project-name roottestattachpartitionusingcopy-gw0 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/docker-compose.yml pull] (cluster.py:121, run_and_check)
2025-11-13 09:11:22.897000 [ 628 ] DEBUG : Stderr: replica2 Skipped - Image is already being pulled by zoo3 (cluster.py:147, run_and_check)
2025-11-13 09:11:22.897000 [ 628 ] DEBUG : Stderr: replica1 Skipped - Image is already being pulled by zoo3 (cluster.py:147, run_and_check)
2025-11-13 09:11:22.897000 [ 628 ] DEBUG : Stderr: zoo1 Skipped - Image is already being pulled by zoo3 (cluster.py:147, run_and_check)
2025-11-13 09:11:22.898000 [ 628 ] DEBUG : Stderr: zoo2 Skipped - Image is already being pulled by zoo3 (cluster.py:147, run_and_check)
2025-11-13 09:11:22.898000 [ 628 ] DEBUG : Stderr: zoo3 Pulling (cluster.py:147, run_and_check)
2025-11-13 09:11:22.898000 [ 628 ] DEBUG : Stderr: zoo3 Pulled (cluster.py:147, run_and_check)
2025-11-13 09:11:22.898000 [ 628 ] DEBUG : Setup ZooKeeper (cluster.py:2799, start)
2025-11-13 09:11:22.898000 [ 628 ] DEBUG : Creating internal ZooKeeper dirs: ['/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper1/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper2/coordination', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/log', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/config', '/ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/keeper3/coordination'] (cluster.py:2800, start)
2025-11-13 09:11:22.900000 [ 628 ] DEBUG : Command:[docker compose --project-name roottestattachpartitionusingcopy-gw0 --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --verbose up -d] (cluster.py:121, run_and_check)
2025-11-13 09:11:23.826000 [ 628 ] DEBUG : Stderr:time="2025-11-13T09:11:22Z" level=trace msg="Docker Desktop integration not enabled" (cluster.py:147, run_and_check)
2025-11-13 09:11:23.826000 [ 628 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw0_default Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.826000 [ 628 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw0_default Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Creating (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Created (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Starting (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Starting (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Starting (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Started (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Started (cluster.py:147, run_and_check)
2025-11-13 09:11:23.827000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Started (cluster.py:147, run_and_check)
2025-11-13 09:11:23.828000 [ 628 ] DEBUG : Stderr:time="2025-11-13T09:11:23Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check)
2025-11-13 09:11:23.828000 [ 628 ] DEBUG : Stderr:time="2025-11-13T09:11:23Z" level=debug msg="otel error" error="" (cluster.py:147, run_and_check)
2025-11-13 09:11:23.828000 [ 628 ] DEBUG : Wait ZooKeeper to start (cluster.py:2436, wait_zookeeper_to_start)
2025-11-13 09:11:23.828000 [ 628 ] DEBUG : get_instance_ip instance_name=zoo1 (cluster.py:2005, get_instance_ip)
2025-11-13 09:11:23.830000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-zoo1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:23.830000 [ 628 ] DEBUG : get_kazoo_client: zoo1, ip:172.16.2.2, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client)
2025-11-13 09:11:23.832000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:23.832000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:23.904000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:23.905000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:23.999000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:23.999000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:24.201000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:24.202000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:24.648000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:24.648000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:25.869000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:25.869000 [ 628 ] WARNING : Connection dropped: socket connection error: Connection refused (connection.py:622, _connect_attempt)
2025-11-13 09:11:27.498000 [ 628 ] INFO : Connecting to 172.16.2.2(172.16.2.2):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:27.499000 [ 628 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2025-11-13 09:11:27.505000 [ 628 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2025-11-13 09:11:27.506000 [ 628 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2025-11-13 09:11:27.507000 [ 628 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2025-11-13 09:11:27.508000 [ 628 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2025-11-13 09:11:27.513000 [ 628 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2025-11-13 09:11:27.513000 [ 628 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2025-11-13 09:11:27.513000 [ 628 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2025-11-13 09:11:27.592000 [ 628 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop)
2025-11-13 09:11:27.592000 [ 628 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2025-11-13 09:11:27.593000 [ 628 ] DEBUG : get_instance_ip instance_name=zoo2 (cluster.py:2005, get_instance_ip)
2025-11-13 09:11:27.596000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-zoo2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:27.597000 [ 628 ] DEBUG : get_kazoo_client: zoo2, ip:172.16.2.4, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client)
2025-11-13 09:11:27.598000 [ 628 ] INFO : Connecting to 172.16.2.4(172.16.2.4):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:27.599000 [ 628 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2025-11-13 09:11:27.608000 [ 628 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2025-11-13 09:11:27.608000 [ 628 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2025-11-13 09:11:27.610000 [ 628 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2025-11-13 09:11:27.610000 [ 628 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2025-11-13 09:11:27.616000 [ 628 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2025-11-13 09:11:27.616000 [ 628 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2025-11-13 09:11:27.616000 [ 628 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2025-11-13 09:11:27.713000 [ 628 ] WARNING : Failed connecting to Zookeeper within the connection retry policy. (connection.py:515, zk_loop)
2025-11-13 09:11:27.713000 [ 628 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback)
2025-11-13 09:11:27.714000 [ 628 ] DEBUG : get_instance_ip instance_name=zoo3 (cluster.py:2005, get_instance_ip)
2025-11-13 09:11:27.717000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-zoo3-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request)
2025-11-13 09:11:27.717000 [ 628 ] DEBUG : get_kazoo_client: zoo3, ip:172.16.2.3, port:2181, use_ssl:False (cluster.py:3312, get_kazoo_client)
2025-11-13 09:11:27.719000 [ 628 ] INFO : Connecting to 172.16.2.3(172.16.2.3):2181, use_ssl: False (connection.py:650, _connect)
2025-11-13 09:11:27.719000 [ 628 ] DEBUG : Sending request(xid=None): Connect(protocol_version=0, last_zxid_seen=0, time_out=30000, session_id=0, passwd=b'\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00', read_only=None) (connection.py:312, _submit)
2025-11-13 09:11:27.728000 [ 628 ] INFO : Zookeeper connection established, state: CONNECTED (client.py:532, _session_callback)
2025-11-13 09:11:27.728000 [ 628 ] DEBUG : Sending request(xid=1): GetChildren(path='/', watcher=None) (connection.py:312, _submit)
2025-11-13 09:11:27.729000 [ 628 ] DEBUG : Received response(xid=1): ['keeper'] (connection.py:410, _read_response)
2025-11-13 09:11:27.730000 [ 628 ] DEBUG : Sending request(xid=2): Close() (connection.py:312, _submit)
2025-11-13 09:11:27.735000 [ 628 ] WARNING : Connection dropped: socket connection broken (connection.py:622, _connect_attempt)
2025-11-13 09:11:27.735000 [ 628 ] WARNING : Transition to CONNECTING (connection.py:626, _connect_attempt)
2025-11-13 09:11:27.735000 [ 628 ] INFO : Zookeeper connection lost (client.py:543, _session_callback)
2025-11-13 09:11:27.835000 [ 628 ] WARNING : Failed connecting to Zookeeper within the connection retry policy.
(connection.py:515, zk_loop) 2025-11-13 09:11:27.836000 [ 628 ] INFO : Zookeeper session closed, state: CLOSED (client.py:537, _session_callback) 2025-11-13 09:11:27.836000 [ 628 ] DEBUG : All instances of ZooKeeper started: ('zoo1', 'zoo2', 'zoo3') (cluster.py:2452, wait_zookeeper_nodes_to_start) 2025-11-13 09:11:27.836000 [ 628 ] DEBUG : ('Trying to create ClickHouse instance by command %s', 'docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --project-name roottestattachpartitionusingcopy-gw0 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/docker-compose.yml up -d --no-recreate') (cluster.py:3139, start) 2025-11-13 09:11:27.837000 [ 628 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --project-name roottestattachpartitionusingcopy-gw0 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/docker-compose.yml up -d --no-recreate] (cluster.py:121, run_and_check) 2025-11-13 09:11:28.404000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Running (cluster.py:147, run_and_check) 2025-11-13 09:11:28.404000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Running (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Running (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Creating (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Creating (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Created (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Created (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Starting (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Starting (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Started (cluster.py:147, run_and_check) 2025-11-13 09:11:28.405000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Started (cluster.py:147, run_and_check) 2025-11-13 09:11:28.406000 [ 628 ] DEBUG : ClickHouse instance created (cluster.py:3147, start) 2025-11-13 09:11:28.406000 [ 628 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2005, get_instance_ip) 2025-11-13 09:11:28.409000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica1-1/json HTTP/1.1" 200 None 
(connectionpool.py:547, _make_request) 2025-11-13 09:11:28.409000 [ 628 ] DEBUG : get_instance_ip instance_name=replica1 (cluster.py:2015, get_instance_global_ipv6) 2025-11-13 09:11:28.411000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.412000 [ 628 ] DEBUG : Waiting for ClickHouse start in replica1, ip: 172.16.2.6... (cluster.py:3155, start) 2025-11-13 09:11:28.413000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica1-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.415000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.518000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.621000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.724000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.826000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:28.929000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.032000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.135000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.238000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.341000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.444000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.547000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.650000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.753000 [ 628 ] DEBUG : 
http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.856000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:29.959000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.062000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.165000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.269000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.372000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.474000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.577000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.680000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.782000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.885000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/4a932431106e01673911aea4fe254c4f33fcec84bf2b5af8dc087aa8036ac585/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.886000 [ 628 ] DEBUG : ClickHouse replica1 started (cluster.py:3159, start) 2025-11-13 09:11:30.887000 [ 628 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2005, get_instance_ip) 2025-11-13 09:11:30.889000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.890000 [ 628 ] DEBUG : get_instance_ip instance_name=replica2 (cluster.py:2015, get_instance_global_ipv6) 2025-11-13 09:11:30.891000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.892000 [ 628 ] DEBUG : Waiting for ClickHouse start in replica2, ip: 172.16.2.5... 
(cluster.py:3155, start) 2025-11-13 09:11:30.894000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/roottestattachpartitionusingcopy-gw0-replica2-1/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.896000 [ 628 ] DEBUG : http://localhost:None "GET /v1.46/containers/35d5fef978c3ab7405cd49038f80842a2640d2db4b9abbc69df61199cae38886/json HTTP/1.1" 200 None (connectionpool.py:547, _make_request) 2025-11-13 09:11:30.897000 [ 628 ] DEBUG : ClickHouse replica2 started (cluster.py:3159, start) ------------------------------ Captured log call ------------------------------- 2025-11-13 09:11:30.899000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:11:31.165000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:11:31.380000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:11:31.595000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:11:31.861000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:12:25.838000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:13:20.361000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = ReplicatedMergeTree('/clickhouse/tables/1/source', 'replica1') ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = 
web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) ___________________ test_cow_policy[cow_policy_multi_volume] ___________________ [gw3] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = storage_policy = 'cow_policy_multi_volume' @pytest.mark.parametrize("storage_policy", ["cow_policy_multi_disk", "cow_policy_multi_volume"]) def test_cow_policy(start_cluster, storage_policy): try: > node.query_with_retry( f""" ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = '{storage_policy}' """, timeout=60, retry_count=3, ) test_cow_policy/test.py:24: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n ...BY (postcode1, postcode2, addr1, addr2)\n SETTINGS storage_policy = 'cow_policy_multi_volume'\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f6180294040> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E E QueryRuntimeException: Client failed! 
Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.1.2:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x0000000020e10280 E 1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000010b395b4 E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x0000000010af5e99 E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x0000000010af1803 E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x0000000010aefdd3 E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x0000000010af03b9 E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x0000000010af0af0 E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x0000000010fc2c29 E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000010fc0a8f E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x0000000010fc089f E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x0000000010fc053e E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x0000000010fc5b23 E 12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x0000000010fc5826 E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x0000000010fc2af1 E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4fb6 E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4218 E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x0000000010fccd2d E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000013cfa745 E 18. 
./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x0000000013cfa9f4 E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000013cfb4a0 E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x0000000013cfe657 E 21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000013cf7eb9 E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x0000000013cfcb61 E 23. DB::ReadBuffer::next() @ 0x000000000891fc7b E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x00000000179744d9 E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797795a E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797737a E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000017971925 E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x00000000178d7f12 E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001b300bdd E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000001b7d5bc7 E 31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000001b7d54e4 E . 
(DNS_ERROR) E (query: ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS storage_policy = 'cow_policy_multi_volume' E ) helpers/cluster.py:3712: Exception ------------------------------ Captured log call ------------------------------- 2025-11-13 09:14:09.936000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 09:15:07.019000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 09:16:04.802000 [ 637 ] DEBUG : Executing query ATTACH TABLE uk_price_paid UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS storage_policy = 'cow_policy_multi_volume' on node (cluster.py:3648, query) 2025-11-13 09:16:59.427000 [ 637 ] DEBUG : Executing query DROP TABLE IF EXISTS uk_price_paid SYNC on node (cluster.py:3648, query) ---------------------------- Captured log teardown ----------------------------- 2025-11-13 09:16:59.746000 [ 637 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env --project-name roottestcowpolicy-gw3 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check) 
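
Note the cadence in the captured log above: query_with_retry issues the identical ATTACH three times, roughly 55-60 s apart, because every attempt sits through a full DNS-resolution timeout before the server returns DNS_ERROR. A pre-flight resolution check would fail the run in milliseconds instead of minutes. A minimal sketch, using only pytest and the standard library (the helper name and skip message are illustrative, not part of the test suite):

    import socket

    import pytest

    WEB_DISK_HOST = "raw.githubusercontent.com"  # host used by the failing ATTACH queries

    def require_dns(host: str = WEB_DISK_HOST, port: int = 443) -> None:
        # Resolve once up front; socket.gaierror here corresponds to the
        # server-side "Not found address of host" (DNS_ERROR) seen above.
        try:
            socket.getaddrinfo(host, port)
        except socket.gaierror as err:
            pytest.skip(f"cannot resolve {host}: {err}")

Called at the top of the test (or from a session fixture), this turns three minute-long retries into one immediate, clearly attributed skip.
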
2025-11-13 09:17:05.910000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 09:17:05.910000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 09:17:05.910000 [ 637 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check) 2025-11-13 09:17:05.922000 [ 637 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/.env --project-name roottestcowpolicy-gw3 --file /ClickHouse/tests/integration/test_cow_policy/_instances-1-gw3/node/docker-compose.yml down --volumes] (cluster.py:121, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Stopping (cluster.py:147, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Stopped (cluster.py:147, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Removing (cluster.py:147, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Container roottestcowpolicy-gw3-node-1 Removed (cluster.py:147, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Network roottestcowpolicy-gw3_default Removing (cluster.py:147, run_and_check) 2025-11-13 09:17:06.424000 [ 637 ] DEBUG : Stderr: Network roottestcowpolicy-gw3_default Removed (cluster.py:147, run_and_check) 2025-11-13 09:17:06.425000 [ 637 ] DEBUG : Cleanup called (cluster.py:851, cleanup) 2025-11-13 09:17:06.443000 [ 637 ] DEBUG : Docker networks for project roottestcowpolicy-gw3 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces) 2025-11-13 09:17:06.459000 [ 637 ] DEBUG : Docker containers for project roottestcowpolicy-gw3 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces) 2025-11-13 09:17:06.480000 [ 637 ] DEBUG : Docker volumes for project roottestcowpolicy-gw3 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces) 2025-11-13 09:17:06.480000 [ 637 ] DEBUG : Command:[docker container list --all --filter name='^/roottestcowpolicy-gw3-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check) 2025-11-13 09:17:06.505000 [ 637 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup) 2025-11-13 09:17:06.505000 [ 637 ] DEBUG : No running containers for project: roottestcowpolicy-gw3 (cluster.py:879, cleanup) 2025-11-13 09:17:06.505000 [ 637 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup) 2025-11-13 09:17:06.529000 [ 637 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup) 2025-11-13 09:17:06.530000 [ 637 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check) 2025-11-13 09:17:06.558000 [ 637 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check) 2025-11-13 09:17:06.558000 [ 637 ] DEBUG : Images pruned (cluster.py:904, cleanup) 2025-11-13 09:17:06.558000 [ 637 ] DEBUG : Trying to prune unused volumes... 
(cluster.py:910, cleanup) 2025-11-13 09:17:06.558000 [ 637 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check) 2025-11-13 09:17:06.583000 [ 637 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check) 2025-11-13 09:17:06.583000 [ 637 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup) _____________________________ test_both_mergetree ______________________________ [gw0] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_both_mergetree(start_cluster): cleanup([replica1, replica2]) > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:106: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f9ccbbdc040> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.2.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. 
[frames 0-31 elided; byte-identical to the stack trace shown above for test_cow_policy[cow_policy_multi_volume]] E .
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3712: Exception ------------------------------ Captured log call ------------------------------- 2025-11-13 09:14:15.787000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:14:16.002000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:14:16.218000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:14:16.434000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:14:16.651000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:15:12.633000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:16:10.417000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county 
LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) _______________________ test_not_work_on_different_disk ________________________ [gw0] linux -- Python 3.10.12 /usr/bin/python3 start_cluster = def test_not_work_on_different_disk(start_cluster): cleanup([replica1, replica2]) # Replace and move should not work on replace > create_source_table(replica1, "source", False) test_attach_partition_using_copy/test.py:199: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ test_attach_partition_using_copy/test.py:40: in create_source_table node.query_with_retry( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n " stdin = None, timeout = 60, settings = None, user = None, password = None database = None, host = None, ignore_error = False, retry_count = 3 sleep_time = 0.5 check_callback = at 0x7f9ccbbdc040> parse = False def query_with_retry( self, sql, stdin=None, timeout=None, settings=None, user=None, password=None, database=None, host=None, ignore_error=False, retry_count=20, sleep_time=0.5, check_callback=lambda x: True, parse=False, ): # logging.debug(f"Executing query {sql} on {self.name}") result = None exception_msg = "" for i in range(retry_count): try: result = self.query( sql, stdin=stdin, timeout=timeout, settings=settings, user=user, password=password, database=database, host=host, ignore_error=ignore_error, parse=parse, ) if check_callback(result): return result time.sleep(sleep_time) except QueryRuntimeException as ex: exception_msg = f"{type(ex).__name__}: {str(ex)}" # Container is down, this is likely due to server crash. if "No route to host" in str(ex): raise time.sleep(sleep_time) except Exception as ex: # logging.debug("Retry {} got exception {}".format(i + 1, ex)) exception_msg = f"{type(ex).__name__}: {str(ex)}" time.sleep(sleep_time) if result is not None: return result > raise Exception(f"Can't execute query {sql}\n{exception_msg}") E Exception: Can't execute query E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8): E Code: 198. DB::Exception: Received from 172.16.2.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace: E E 0. 
[frames 0-31 elided; byte-identical to the stack trace shown above for test_cow_policy[cow_policy_multi_volume]] E .
(DNS_ERROR) E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' E ( E price UInt32, E date Date, E postcode1 LowCardinality(String), E postcode2 LowCardinality(String), E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), E is_new UInt8, E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), E addr1 String, E addr2 String, E street LowCardinality(String), E locality LowCardinality(String), E town LowCardinality(String), E district LowCardinality(String), E county LowCardinality(String) E ) E ENGINE = MergeTree() E ORDER BY (postcode1, postcode2, addr1, addr2) E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') E ) helpers/cluster.py:3712: Exception ------------------------------ Captured log call ------------------------------- 2025-11-13 09:17:05.133000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:17:05.399000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query) 2025-11-13 09:17:05.666000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:17:05.932000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query) 2025-11-13 09:17:06.198000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:18:00.876000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query) 2025-11-13 09:18:57.926000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county 
_______________________ test_only_destination_replicated _______________________
[gw0] linux -- Python 3.10.12 /usr/bin/python3

start_cluster = 

    def test_only_destination_replicated(start_cluster):
        cleanup([replica1, replica2])
>       create_source_table(replica1, "source", False)

test_attach_partition_using_copy/test.py:163:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
test_attach_partition_using_copy/test.py:40: in create_source_table
    node.query_with_retry(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = 
sql = "\n ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'\n (\n price UInt32,\n ...disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')\n "
stdin = None, timeout = 60, settings = None, user = None, password = None
database = None, host = None, ignore_error = False, retry_count = 3
sleep_time = 0.5
check_callback = at 0x7f9ccbbdc040>
parse = False

    def query_with_retry(
        self,
        sql,
        stdin=None,
        timeout=None,
        settings=None,
        user=None,
        password=None,
        database=None,
        host=None,
        ignore_error=False,
        retry_count=20,
        sleep_time=0.5,
        check_callback=lambda x: True,
        parse=False,
    ):
        # logging.debug(f"Executing query {sql} on {self.name}")
        result = None
        exception_msg = ""
        for i in range(retry_count):
            try:
                result = self.query(
                    sql,
                    stdin=stdin,
                    timeout=timeout,
                    settings=settings,
                    user=user,
                    password=password,
                    database=database,
                    host=host,
                    ignore_error=ignore_error,
                    parse=parse,
                )
                if check_callback(result):
                    return result
                time.sleep(sleep_time)
            except QueryRuntimeException as ex:
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                # Container is down, this is likely due to server crash.
                if "No route to host" in str(ex):
                    raise
                time.sleep(sleep_time)
            except Exception as ex:
                # logging.debug("Retry {} got exception {}".format(i + 1, ex))
                exception_msg = f"{type(ex).__name__}: {str(ex)}"
                time.sleep(sleep_time)
        if result is not None:
            return result
>       raise Exception(f"Can't execute query {sql}\n{exception_msg}")
E Exception: Can't execute query
E ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E (
E price UInt32,
E date Date,
E postcode1 LowCardinality(String),
E postcode2 LowCardinality(String),
E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E is_new UInt8,
E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E addr1 String,
E addr2 String,
E street LowCardinality(String),
E locality LowCardinality(String),
E town LowCardinality(String),
E district LowCardinality(String),
E county LowCardinality(String)
E )
E ENGINE = MergeTree()
E ORDER BY (postcode1, postcode2, addr1, addr2)
E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E
E QueryRuntimeException: Client failed! Return code: 198, stderr: Received exception from server (version 25.3.8):
E Code: 198. DB::Exception: Received from 172.16.2.6:9000. DB::NetException. DB::NetException: Not found address of host: raw.githubusercontent.com: while loading disk metadata. Stack trace:
E
E 0. ./contrib/llvm-project/libcxx/include/__exception/exception.h:113: Poco::Exception::Exception(String const&, int) @ 0x0000000020e10280
E 1. ./build_docker/./src/Common/Exception.cpp:108: DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x0000000010b395b4
E 2. ./src/Common/Exception.h:112: DB::NetException::NetException(int, FormatStringHelperImpl::type>, String const&) @ 0x0000000010af5e99
E 3. ./build_docker/./src/Common/DNSResolver.cpp:113: DB::(anonymous namespace)::hostByName(String const&) @ 0x0000000010af1803
E 4. ./build_docker/./src/Common/DNSResolver.cpp:138: DB::DNSResolver::getResolvedIPAdressessWithFiltering(String const&) @ 0x0000000010aefdd3
E 5. ./build_docker/./src/Common/DNSResolver.cpp:256: DB::DNSResolver::resolveIPAddressWithCache(String const&) @ 0x0000000010af03b9
E 6. ./build_docker/./src/Common/DNSResolver.cpp:276: DB::DNSResolver::resolveHostAllInOriginOrder(String const&) @ 0x0000000010af0af0
E 7. ./build_docker/./src/Common/HostResolvePool.cpp:54: std::vector> std::__function::__policy_invoker> (String const&)>::__call_impl[abi:ne190107]> (String const&)>>(std::__function::__policy_storage const*, String const&) @ 0x0000000010fc2c29
E 8. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000010fc0a8f
E 9. ./build_docker/./src/Common/HostResolvePool.cpp:66: DB::HostResolver::HostResolver(std::function> (String const&)>&&, String, Poco::Timespan) @ 0x0000000010fc089f
E 10. ./build_docker/./src/Common/HostResolvePool.cpp:53: DB::HostResolver::HostResolver(String, Poco::Timespan) @ 0x0000000010fc053e
E 11. ./src/Common/HostResolvePool.h:62: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler::make_shared_enabler(String const&) @ 0x0000000010fc5b23
E 12. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr DB::HostResolver::create(String const&)::make_shared_enabler> std::allocate_shared[abi:ne190107] DB::HostResolver::create(String const&)::make_shared_enabler, std::allocator DB::HostResolver::create(String const&)::make_shared_enabler>, String const&, 0>(std::allocator DB::HostResolver::create(String const&)::make_shared_enabler> const&, String const&) @ 0x0000000010fc5826
E 13. ./contrib/llvm-project/libcxx/include/__memory/shared_ptr.h:851: DB::HostResolversPool::getResolver(String const&) @ 0x0000000010fc2af1
E 14. ./build_docker/./src/Common/HTTPConnectionPool.cpp:671: DB::EndpointConnectionPool::prepareNewConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4fb6
E 15. ./build_docker/./src/Common/HTTPConnectionPool.cpp:590: DB::EndpointConnectionPool::getConnection(DB::ConnectionTimeouts const&, unsigned long*) @ 0x0000000010fb4218
E 16. ./build_docker/./src/IO/HTTPCommon.cpp:63: DB::makeHTTPSession(DB::HTTPConnectionGroupType, Poco::URI const&, DB::ConnectionTimeouts const&, DB::ProxyConfiguration const&, unsigned long*) @ 0x0000000010fccd2d
E 17. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:267: DB::ReadWriteBufferFromHTTP::callImpl(Poco::Net::HTTPResponse&, String const&, std::optional const&, bool) const @ 0x0000000013cfa745
E 18. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:285: DB::ReadWriteBufferFromHTTP::callWithRedirects(Poco::Net::HTTPResponse&, String const&, std::optional const&) @ 0x0000000013cfa9f4
E 19. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:408: DB::ReadWriteBufferFromHTTP::initialize() @ 0x0000000013cfb4a0
E 20. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:472: void std::__function::__policy_invoker::__call_impl[abi:ne190107]>(std::__function::__policy_storage const*) @ 0x0000000013cfe657
E 21. ./contrib/llvm-project/libcxx/include/__functional/function.h:716: ? @ 0x0000000013cf7eb9
E 22. ./build_docker/./src/IO/ReadWriteBufferFromHTTP.cpp:465: DB::ReadWriteBufferFromHTTP::nextImpl() @ 0x0000000013cfcb61
E 23. DB::ReadBuffer::next() @ 0x000000000891fc7b
E 24. ./src/IO/ReadBuffer.h:96: DB::WebObjectStorage::loadFiles(String const&, std::unique_lock const&) const @ 0x00000000179744d9
E 25. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:225: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797795a
E 26. ./build_docker/./src/Disks/ObjectStorages/Web/WebObjectStorage.cpp:185: DB::WebObjectStorage::tryGetFileInfo(String const&) const @ 0x000000001797737a
E 27. ./build_docker/./src/Disks/ObjectStorages/Web/MetadataStorageFromStaticFilesWebServer.cpp:106: DB::MetadataStorageFromStaticFilesWebServer::getStorageObjectsIfExist(String const&) const @ 0x0000000017971925
E 28. ./build_docker/./src/Disks/ObjectStorages/DiskObjectStorage.cpp:785: DB::DiskObjectStorage::readFileIfExists(String const&, DB::ReadSettings const&, std::optional, std::optional) const @ 0x00000000178d7f12
E 29. ./build_docker/./src/Storages/MergeTree/MergeTreeData.cpp:380: DB::MergeTreeData::initializeDirectoriesAndFormatVersion(String const&, bool, String const&, bool) @ 0x000000001b300bdd
E 30. ./build_docker/./src/Storages/StorageMergeTree.cpp:159: DB::StorageMergeTree::StorageMergeTree(DB::StorageID const&, String const&, DB::StorageInMemoryMetadata const&, DB::LoadingStrictnessLevel, std::shared_ptr, String const&, DB::MergeTreeData::MergingParams const&, std::unique_ptr>) @ 0x000000001b7d5bc7
E 31. ./contrib/llvm-project/libcxx/include/__memory/construct_at.h:41: std::shared_ptr std::allocate_shared[abi:ne190107], DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>, 0>(std::allocator const&, DB::StorageID const&, String const&, DB::StorageInMemoryMetadata&, DB::LoadingStrictnessLevel const&, std::shared_ptr&, String&, DB::MergeTreeData::MergingParams&, std::unique_ptr>&&) @ 0x000000001b7d54e4
E . (DNS_ERROR)
E (query: ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7'
E (
E price UInt32,
E date Date,
E postcode1 LowCardinality(String),
E postcode2 LowCardinality(String),
E type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4),
E is_new UInt8,
E duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2),
E addr1 String,
E addr2 String,
E street LowCardinality(String),
E locality LowCardinality(String),
E town LowCardinality(String),
E district LowCardinality(String),
E county LowCardinality(String)
E )
E ENGINE = MergeTree()
E ORDER BY (postcode1, postcode2, addr1, addr2)
E SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/')
E )

helpers/cluster.py:3712: Exception
------------------------------ Captured log call -------------------------------
2025-11-13 09:19:55.788000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica1 (cluster.py:3648, query)
2025-11-13 09:19:56.004000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica1 (cluster.py:3648, query)
2025-11-13 09:19:56.219000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS source SYNC on replica2 (cluster.py:3648, query)
2025-11-13 09:19:56.486000 [ 628 ] DEBUG : Executing query DROP TABLE IF EXISTS destination SYNC on replica2 (cluster.py:3648, query)
2025-11-13 09:19:56.703000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query)
2025-11-13 09:20:50.583000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query)
2025-11-13 09:21:45.814000 [ 628 ] DEBUG : Executing query ATTACH TABLE source UUID 'cf712b4f-2ca8-435c-ac23-c4393efe52f7' ( price UInt32, date Date, postcode1 LowCardinality(String), postcode2 LowCardinality(String), type Enum8('other' = 0, 'terraced' = 1, 'semi-detached' = 2, 'detached' = 3, 'flat' = 4), is_new UInt8, duration Enum8('unknown' = 0, 'freehold' = 1, 'leasehold' = 2), addr1 String, addr2 String, street LowCardinality(String), locality LowCardinality(String), town LowCardinality(String), district LowCardinality(String), county LowCardinality(String) ) ENGINE = MergeTree() ORDER BY (postcode1, postcode2, addr1, addr2) SETTINGS disk = disk(type = web, endpoint = 'https://raw.githubusercontent.com/ClickHouse/web-tables-demo/main/web/') on replica1 (cluster.py:3648, query)
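This is the same signature as every other failure in the run: three ATTACH attempts, each ending in the same DNS_ERROR from raw.githubusercontent.com. Since the run writes a pytest report log (parallel0_1.jsonl, named in the PYTEST_ADDOPTS and in the report line further below), the pattern can be confirmed mechanically. A rough sketch, assuming the usual pytest-reportlog JSON-lines fields ($report_type, when, outcome, nodeid):

import json

# Rough sketch: list failed test calls whose report mentions DNS_ERROR.
# Field names are assumed from the pytest-reportlog JSON-lines format.
dns_failures = []
with open("parallel0_1.jsonl") as fh:
    for raw in fh:
        rec = json.loads(raw)
        if (
            rec.get("$report_type") == "TestReport"
            and rec.get("when") == "call"
            and rec.get("outcome") == "failed"
            and "DNS_ERROR" in raw
        ):
            dns_failures.append(rec.get("nodeid"))
print(f"{len(dns_failures)} tests failed with DNS_ERROR")
for nodeid in dns_failures:
    print(" ", nodeid)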
---------------------------- Captured log teardown -----------------------------
2025-11-13 09:22:43.022000 [ 628 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --project-name roottestattachpartitionusingcopy-gw0 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/docker-compose.yml stop --timeout 20] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.220000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.221000 [ 628 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.233000 [ 628 ] DEBUG : Command:[bash -c [ -f /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/logs/stderr.log ] && zgrep -aH "==================" /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/logs/stderr.log* | ( [ -z "" ] && cat || grep -v "$" ) || true] (cluster.py:121, run_and_check)
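The two bash commands above are the routine post-mortem scan of each instance's stderr.log for the "==================" separator, presumably the banner that sanitizer reports print; no matches are logged here, so the failures do not look like server crashes. A rough Python equivalent, assuming a plain stderr.log (the zgrep original also handles gzip-rotated files):

from pathlib import Path

# Rough equivalent of the zgrep check above; assumes a plain stderr.log,
# whereas the real command also handles gzip-rotated files via zgrep.
def has_sanitizer_banner(log_path: str) -> bool:
    path = Path(log_path)
    if not path.exists():
        return False
    with path.open(errors="replace") as fh:
        return any("==================" in line for line in fh)

print(has_sanitizer_banner("_instances-1-gw0/replica1/logs/stderr.log"))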
2025-11-13 09:22:50.244000 [ 628 ] DEBUG : Command:[docker compose --env-file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/.env --project-name roottestattachpartitionusingcopy-gw0 --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica1/docker-compose.yml --file /ClickHouse/tests/integration/helpers/../../../tests/integration/compose/docker_compose_keeper.yml --file /ClickHouse/tests/integration/test_attach_partition_using_copy/_instances-1-gw0/replica2/docker-compose.yml down --volumes] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica2-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-replica1-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.707000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Stopping (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Stopped (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo3-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo1-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Container roottestattachpartitionusingcopy-gw0-zoo2-1 Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw0_default Removing (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Stderr: Network roottestattachpartitionusingcopy-gw0_default Removed (cluster.py:147, run_and_check)
2025-11-13 09:22:50.708000 [ 628 ] DEBUG : Cleanup called (cluster.py:851, cleanup)
2025-11-13 09:22:50.724000 [ 628 ] DEBUG : Docker networks for project roottestattachpartitionusingcopy-gw0 are NETWORK ID NAME DRIVER SCOPE (cluster.py:830, print_all_docker_pieces)
2025-11-13 09:22:50.741000 [ 628 ] DEBUG : Docker containers for project roottestattachpartitionusingcopy-gw0 are CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES (cluster.py:838, print_all_docker_pieces)
2025-11-13 09:22:50.758000 [ 628 ] DEBUG : Docker volumes for project roottestattachpartitionusingcopy-gw0 are DRIVER VOLUME NAME (cluster.py:846, print_all_docker_pieces)
2025-11-13 09:22:50.758000 [ 628 ] DEBUG : Command:[docker container list --all --filter name='^/roottestattachpartitionusingcopy-gw0-.*-1$' --format '{{.ID}}:{{.Names}}'] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.776000 [ 628 ] DEBUG : Unstopped containers: {} (cluster.py:865, cleanup)
2025-11-13 09:22:50.776000 [ 628 ] DEBUG : No running containers for project: roottestattachpartitionusingcopy-gw0 (cluster.py:879, cleanup)
2025-11-13 09:22:50.776000 [ 628 ] DEBUG : Trying to prune unused networks... (cluster.py:885, cleanup)
2025-11-13 09:22:50.794000 [ 628 ] DEBUG : Trying to prune unused images... (cluster.py:901, cleanup)
2025-11-13 09:22:50.795000 [ 628 ] DEBUG : Command:[docker image prune -f] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.821000 [ 628 ] DEBUG : Stdout:Total reclaimed space: 0B (cluster.py:145, run_and_check)
2025-11-13 09:22:50.821000 [ 628 ] DEBUG : Images pruned (cluster.py:904, cleanup)
2025-11-13 09:22:50.821000 [ 628 ] DEBUG : Trying to prune unused volumes... (cluster.py:910, cleanup)
2025-11-13 09:22:50.821000 [ 628 ] DEBUG : Command:[docker volume ls | wc -l] (cluster.py:121, run_and_check)
2025-11-13 09:22:50.841000 [ 628 ] DEBUG : Stdout:1 (cluster.py:145, run_and_check)
2025-11-13 09:22:50.841000 [ 628 ] DEBUG : Volumes pruned: 1 (cluster.py:915, cleanup)
----------------- generated report log file: parallel0_1.jsonl -----------------
============================== slowest durations ===============================
170.58s call test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
169.71s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
169.26s call test_attach_partition_using_copy/test.py::test_both_mergetree
167.11s call test_attach_partition_using_copy/test.py::test_only_destination_replicated
164.69s call test_attach_partition_using_copy/test.py::test_all_replicated
163.94s call test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
18.77s setup test_attach_partition_using_copy/test.py::test_all_replicated
13.69s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
7.82s teardown test_attach_partition_using_copy/test.py::test_only_destination_replicated
6.84s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
0.00s teardown test_attach_partition_using_copy/test.py::test_all_replicated
0.00s teardown test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]
0.00s setup test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]
0.00s setup test_attach_partition_using_copy/test.py::test_both_mergetree
0.00s teardown test_attach_partition_using_copy/test.py::test_both_mergetree
0.00s teardown test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
0.00s setup test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
0.00s setup test_attach_partition_using_copy/test.py::test_only_destination_replicated
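The call durations line up with the retry parameters visible earlier: the captured logs show successive ATTACH attempts starting roughly 55 s apart (09:17:06 -> 09:18:00 -> 09:18:57), and query_with_retry runs with retry_count=3 and sleep_time=0.5, so each failing test should burn about 166 s before raising, matching the observed 163.94-170.58 s. A back-of-the-envelope check, with per_attempt read off this log:

# Back-of-the-envelope check of the ~164-171 s call durations above.
retry_count = 3      # passed to query_with_retry by these tests
sleep_time = 0.5     # query_with_retry pause after a failed attempt
per_attempt = 55.0   # seconds per ATTACH, read off the captured log timestamps

estimate = retry_count * (per_attempt + sleep_time)
print(f"expected duration per failing test: ~{estimate:.0f} s")  # ~167 s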
=========================== short test summary info ============================
FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk] - Exce...
FAILED test_attach_partition_using_copy/test.py::test_all_replicated - Except...
FAILED test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume] - Ex...
FAILED test_attach_partition_using_copy/test.py::test_both_mergetree - Except...
FAILED test_attach_partition_using_copy/test.py::test_not_work_on_different_disk
FAILED test_attach_partition_using_copy/test.py::test_only_destination_replicated
======================== 6 failed in 700.90s (0:11:40) =========================
Traceback (most recent call last):
  File "/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration/./runner", line 492, in <module>
    subprocess.check_call(cmd, shell=True, bufsize=0)
  File "/usr/lib/python3.10/subprocess.py", line 369, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command 'docker run --rm --name clickhouse_integration_tests_mmxjdk --privileged --dns-search='.' --memory=30709039104 --security-opt seccomp=unconfined --cap-add=SYS_PTRACE --volume=/home/ubuntu/_work/_temp/test/build/clickhouse:/clickhouse --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/programs/server:/clickhouse-config --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/tests/integration:/ClickHouse/tests/integration --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/backupview:/ClickHouse/utils/backupview --volume=/home/ubuntu/_work/ClickHouse/ClickHouse/utils/grpc-client/pb2:/ClickHouse/utils/grpc-client/pb2 --volume=/run:/run/host:ro --volume=clickhouse_integration_tests_volume:/var/lib/docker -e DOCKER_DOTNET_CLIENT_TAG=11de0b29a15d -e DOCKER_HELPER_TAG=5dc43a6382f0 -e DOCKER_BASE_TAG=5ccda723c1fc -e DOCKER_KERBEROS_KDC_TAG=9391ecdee8d7 -e DOCKER_MYSQL_GOLANG_CLIENT_TAG=9bec2a638e6e -e DOCKER_MYSQL_JAVA_CLIENT_TAG=766bff31cfe4 -e DOCKER_MYSQL_JS_CLIENT_TAG=41ba7c2ec2a1 -e DOCKER_MYSQL_PHP_CLIENT_TAG=88be89c1e3b6 -e DOCKER_NGINX_DAV_TAG=b55ac9cd7519 -e DOCKER_POSTGRESQL_JAVA_CLIENT_TAG=a4eff5c7f4d6 -e DOCKER_PYTHON_BOTTLE_TAG=d862517635bf -e DOCKER_CLIENT_TIMEOUT=300 -e COMPOSE_HTTP_TIMEOUT=600 -e PYTHONUNBUFFERED=1 -e PYTEST_ADDOPTS="--dist=loadfile -n 10 -rfEps --run-id=1 --color=no --durations=0 --report-log=parallel0_1.jsonl --report-log-exclude-logs-on-passed-tests test_attach_partition_using_copy/test.py::test_all_replicated test_attach_partition_using_copy/test.py::test_both_mergetree test_attach_partition_using_copy/test.py::test_not_work_on_different_disk test_attach_partition_using_copy/test.py::test_only_destination_replicated 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_disk]' 'test_cow_policy/test.py::test_cow_policy[cow_policy_multi_volume]' -vvv " altinityinfra/integration-tests-runner:226bfaf75ac1 ' returned non-zero exit status 1.
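The trailing traceback is only the runner noticing pytest's exit status: runner line 492 wraps the whole docker run invocation in subprocess.check_call, so the six test failures inside the container surface as a CalledProcessError carrying the full command line as its message. A minimal sketch of that propagation (the alpine command is a stand-in, not the real invocation):

import subprocess

# Stand-in for the real `docker run ... integration-tests-runner ...` command;
# any non-zero exit status inside the container propagates the same way.
cmd = "docker run --rm alpine sh -c 'exit 1'"
try:
    subprocess.check_call(cmd, shell=True, bufsize=0)  # same call as runner:492
except subprocess.CalledProcessError as exc:
    print(f"Command {exc.cmd!r} returned non-zero exit status {exc.returncode}.")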